330 research outputs found

    Using Local Search to Find MSSes and MUSes

    In this paper, a new complete technique to compute Maximal Satisfiable Subsets (MSSes) and Minimally Unsatisfiable Subformulas (MUSes) of sets of Boolean clauses is introduced. The approach improves the currently most efficient complete technique in several ways: it makes use of the powerful concept of critical clause and of a computationally inexpensive local search oracle to boost an exhaustive algorithm proposed by Liffiton and Sakallah. These features can yield exponential efficiency gains. Accordingly, experimental studies show that this new approach outperforms the best existing exhaustive ones.
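To fix the terminology: an MSS is a maximal subset of the clauses that is satisfiable, and a MUS is a minimal subset that is unsatisfiable. The brute-force Python sketch below only illustrates these two notions on toy DIMACS-style clauses; it is not the paper's algorithm, which instead boosts Liffiton and Sakallah's enumeration with critical clauses and a local search oracle.

```python
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check. Clauses are lists of non-zero ints
    (DIMACS style): variable i is literal i, its negation is -i."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in c)
               for c in clauses):
            return True
    return False

def msses_and_muses(clauses, n_vars):
    """Enumerate all MSSes (maximal satisfiable subsets of clause
    indices) and MUSes (minimal unsatisfiable subsets) exhaustively."""
    idx = range(len(clauses))
    sat_subsets, unsat_subsets = set(), set()
    for k in range(len(clauses) + 1):
        for subset in combinations(idx, k):
            sub = [clauses[i] for i in subset]
            target = sat_subsets if satisfiable(sub, n_vars) else unsat_subsets
            target.add(frozenset(subset))
    msses = [s for s in sat_subsets if not any(s < t for t in sat_subsets)]
    muses = [s for s in unsat_subsets if not any(t < s for t in unsat_subsets)]
    return msses, muses
```

For example, on the formula {x1, ¬x1, x1 ∨ x2} the MSSes are {x1, x1 ∨ x2} and {¬x1, x1 ∨ x2}, and the single MUS is {x1, ¬x1}.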

    A new hybrid method for computing all MSSes and all MUSes

    In this paper, we present a new complete technique for computing the maximal satisfiable subformulas (MSSes) and the minimal unsatisfiable subformulas (MUSes) of a set of Boolean clauses. This approach improves the best known complete technique in several ways. It uses both an inexpensive local search and the new concept of critical clause to improve a complete algorithm proposed by Liffiton and Sakallah. This hybridization yields exponential gains. Accordingly, experimental results show that this new approach outperforms the best method proposed so far.

    Preserving Partial Solutions while Relaxing Constraint Networks

    This paper is about transforming constraint networks to accommodate additional constraints in specific ways. The focus is on two intertwined issues. First, we investigate how partial solutions to an initial network can be preserved from the potential impact of additional constraints. Second, we study how more permissive constraints, which are intended to enlarge the set of solutions, can be accommodated in a constraint network. These two problems are studied in the general case, and light is shed on their relationship. A case study is then investigated where a more permissive additional constraint is taken into account through a form of network relaxation, while some previous partial solutions are preserved at the same time.

    A CSP solver focusing on FAC variables

    The contribution of this paper is twofold. On the one hand, it introduces the concept of FAC variables in discrete Constraint Satisfaction Problems (CSPs). FAC variables can be discovered by local search techniques and powerfully exploited by MAC-based methods. On the other hand, a novel synergetic combination schema between local search paradigms, generalized arc-consistency and MAC-based algorithms is presented. By orchestrating a multiple-way flow of information between these various fully integrated search components, it often proves more competitive than the usual techniques on most classes of instances.

    Relax!

    This paper is concerned with a form of relaxation of constraint networks. The focus is on situations where additional constraints are intended to extend a non-empty set of preexisting solutions. These constraints require a specific treatment since merely inserting them inside the network would lead to their preemption by more restrictive ones. Several approaches to handle these additional constraints are investigated from conceptual and experimental points of view.

    Removing redundant clauses from SAT instances

    In this paper, we study how removing some of the redundant clauses of a SAT instance improves the efficiency of modern SAT solvers. The problem of deciding whether an instance contains redundant clauses is NP-complete. We propose an incomplete but polynomial-time method for removing redundant clauses, which we use as a preprocessing step for SAT solvers. This method, based on unit propagation, gives interesting results, notably on very hard real-world instances.
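The unit-propagation redundancy test rests on a standard observation: a clause c is entailed by the remaining clauses whenever unit propagation on those clauses, started from the negation of every literal of c, reaches a conflict. The Python sketch below illustrates this test on toy DIMACS-style clauses; all names are invented here, and the paper's actual preprocessor is not shown.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign forced literals; return False on conflict.
    clauses: lists of DIMACS-style ints; assignment: dict var -> bool."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for l in clause:
                v = assignment.get(abs(l))
                if v is None:
                    unassigned.append(l)
                elif v == (l > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return False          # clause falsified: conflict
            if len(unassigned) == 1:  # unit clause: forced assignment
                l = unassigned[0]
                assignment[abs(l)] = l > 0
                changed = True
    return True

def remove_up_redundant(clauses):
    """Drop clause c when unit propagation on the remaining clauses
    plus the negation of c yields a conflict, i.e. the rest entails c.
    Polynomial but incomplete: some redundant clauses may survive."""
    kept = list(clauses)
    for c in list(kept):
        rest = [d for d in kept if d is not c]
        falsifying = {abs(l): not (l > 0) for l in c}
        if not unit_propagate(rest, falsifying):
            kept = rest
    return kept
```

For instance, in {x1, ¬x1 ∨ x2, x1 ∨ x2} the clause x1 ∨ x2 is detected as redundant: assuming x1 and x2 false immediately falsifies the unit clause x1.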

    Competitiveness also depends on taxation: our ideas for adapting the 2013 finance law to the competitiveness pact

    The result of a collective discussion, this note presents the Fondation's proposals for adjusting the provisions of the 2013 finance law in order to make them consistent with the recommendations of the competitiveness pact. This text stems from a conversation between Aldo Cardoso, member of the supervisory board of the Fondation pour l'innovation politique, Michel Didier, honorary professor at CNAM and president of Coe-Rexecode, Bertrand Jacquillat, professor at Sciences Po and president of Associés en Finance, Dominique Reynié, and Grégoire Sentilhes, president of NextStage, of the Journées de l'Entrepreneur and of the G20 YES in France.

    PlaStIL: Plastic and Stable Memory-Free Class-Incremental Learning

    Plasticity and stability are needed in class-incremental learning in order to learn from new data while preserving past knowledge. Due to catastrophic forgetting, finding a compromise between these two properties is particularly challenging when no memory buffer is available. Mainstream methods need to store two deep models, since they integrate new classes using fine-tuning with knowledge distillation from the previous incremental state. We propose a method which has a similar number of parameters but distributes them differently in order to find a better balance between plasticity and stability. Following an approach already deployed by transfer-based incremental methods, we freeze the feature extractor after the initial state. Classes in the oldest incremental states are trained with this frozen extractor to ensure stability. Recent classes are predicted using partially fine-tuned models in order to introduce plasticity. Our proposed plasticity layer can be incorporated into any transfer-based method designed for exemplar-free incremental learning, and we apply it to two such methods. Evaluation is done with three large-scale datasets. Results show that performance gains are obtained in all tested configurations compared to existing methods.
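The stability/plasticity split described above can be caricatured in a few lines of numpy. Everything in this sketch is a stand-in invented for illustration, not the paper's models: a one-layer ReLU map plays the feature extractor, nearest-class-mean prototypes play the classifiers, and "partial fine-tuning" is reduced to a small perturbation of the frozen weights. The point is only the routing: old classes are scored in the frozen feature space, recent classes in the fine-tuned one.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((4, 8))  # extractor frozen after the initial state

def features(x, W):
    """Stand-in feature extractor: one linear layer plus ReLU."""
    return np.maximum(x @ W, 0.0)

def class_data(c):
    """Well-separated toy data for class c (centers 0, 5, 10, ...)."""
    return 5.0 * c + 0.1 * rng.standard_normal((20, 4))

# stability: old classes get nearest-mean prototypes in the FROZEN space
old_protos = {c: features(class_data(c), W_frozen).mean(axis=0) for c in (0, 1)}

# plasticity: a recent class uses a slightly "fine-tuned" copy of the extractor
W_tuned = W_frozen + 0.05 * rng.standard_normal((4, 8))
recent_protos = {2: features(class_data(2), W_tuned).mean(axis=0)}

def predict(x):
    """Score each class with the extractor of its own incremental state."""
    scores = {c: -np.linalg.norm(features(x, W_frozen) - p)
              for c, p in old_protos.items()}
    scores.update({c: -np.linalg.norm(features(x, W_tuned) - p)
                   for c, p in recent_protos.items()})
    return max(scores, key=scores.get)
```

Comparing raw distances across two different feature spaces, as done here, is a naive fusion rule kept only for brevity; reconciling scores across incremental states is precisely the kind of issue a real exemplar-free method has to handle more carefully.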